Results for 'Deborah G. Kemler Nelson'

1000+ found
  1. Was it designed to do that? Children’s focus on intended function in their conceptualization of artifacts. Yvonne M. Asher & Deborah G. Kemler Nelson - 2008 - Cognition 106 (1):474-483. (2 citations)
  2. Young children's use of functional information to categorize artifacts: three factors that matter. Deborah G. Kemler Nelson, Anne Frankenfield, Catherine Morris & Elizabeth Blair - 2000 - Cognition 77 (2):133-168. (22 citations)
  3. Clauses are perceptual units for young infants. Kathy Hirsh-Pasek, Deborah G. Kemler Nelson, Peter W. Jusczyk, Kimberly Wright Cassidy, Benjamin Druss & Lori Kennedy - 1987 - Cognition 26 (3):269-286. (34 citations)
  4. What child is this? What interval was that? Familiar tunes and music perception in novice listeners. J. David Smith, Deborah G. Kemler Nelson, Lisa A. Grohskopf & Terry Appleton - 1994 - Cognition 52 (1):23-54. (4 citations)
  5. When explanations compete: the role of explanatory coherence on judgements of likelihood. Steven A. Sloman - 1994 - Cognition 52 (1):1-21.
  6. Does sentential prosody help infants organize and remember speech information? Denise R. Mandel, Peter W. Jusczyk & Deborah G. Kemler Nelson - 1994 - Cognition 53 (2):155-180. (9 citations)
  7. Learning and transfer of dimensional relevance and irrelevance in children. Deborah G. Kemler & Bryan E. Shepp - 1971 - Journal of Experimental Psychology 90 (1):120.
  8. Selective attention and dimensional learning: A logical analysis of two-stage attention theories. Daniel R. Anderson, Deborah G. Kemler & Bryan E. Shepp - 1973 - Bulletin of the Psychonomic Society 2 (5):273-275.
  9. Selective attention and the breadth of learning: An extension of the one-look model. Bryan E. Shepp, Deborah G. Kemler & Daniel R. Anderson - 1972 - Psychological Review 79 (4):317-328.
  10. Error and the Growth of Experimental Knowledge. Deborah G. Mayo - 1996 - University of Chicago Press. (221 citations)
    This text provides a critique of the subjective Bayesian view of statistical inference, and proposes the author's own error-statistical approach as an alternative framework for the epistemology of experiment. It seeks to address the needs of researchers who work with statistical analysis.
  11. Severe testing as a basic concept in a Neyman–Pearson philosophy of induction. Deborah G. Mayo & Aris Spanos - 2006 - British Journal for the Philosophy of Science 57 (2):323-357. (62 citations)
    Despite the widespread use of key concepts of the Neyman–Pearson (N–P) statistical paradigm—type I and II errors, significance levels, power, confidence levels—they have been the subject of philosophical controversy and debate for over 60 years. Both current and long-standing problems of N–P tests stem from unclarity and confusion, even among N–P adherents, as to how a test's (pre-data) error probabilities are to be used for (post-data) inductive inference as opposed to inductive behavior. We argue that the relevance of error probabilities (...)
  12. Technology with No Human Responsibility? Deborah G. Johnson - 2015 - Journal of Business Ethics 127 (4):707-715. (36 citations)
  13. Novel evidence and severe tests. Deborah G. Mayo - 1991 - Philosophy of Science 58 (4):523-552. (56 citations)
    While many philosophers of science have accorded special evidential significance to tests whose results are "novel facts", there continues to be disagreement over both the definition of novelty and why it should matter. The view of novelty favored by Giere, Lakatos, Worrall and many others is that of use-novelty: An accordance between evidence e and hypothesis h provides a genuine test of h only if e is not used in h's construction. I argue that what lies behind the intuition that (...)
  14. Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science. Deborah G. Mayo & Aris Spanos (eds.) - 2009 - New York: Cambridge University Press.
    Although both philosophers and scientists are interested in how to obtain reliable knowledge in the face of error, there is a gap between their perspectives that has been an obstacle to progress. By means of a series of exchanges between the editors and leaders from the philosophy of science, statistics and economics, this volume offers a cumulative introduction connecting problems of traditional philosophy of science to problems of inference in statistical and empirical modelling practice. Philosophers of science and scientific practitioners (...)
  15. Methodology in Practice: Statistical Misspecification Testing. Deborah G. Mayo & Aris Spanos - 2004 - Philosophy of Science 71 (5):1007-1025. (40 citations)
    The growing availability of computer power and statistical software has greatly increased the ease with which practitioners apply statistical methods, but this has not been accompanied by attention to checking the assumptions on which these methods are based. At the same time, disagreements about inferences based on statistical research frequently revolve around whether the assumptions are actually met in the studies available, e.g., in psychology, ecology, biology, risk assessment. Philosophical scrutiny can help disentangle 'practical' problems of model validation, and conversely, (...)
  16. Why robots should not be treated like animals. Deborah G. Johnson & Mario Verdicchio - 2018 - Ethics and Information Technology 20 (4):291-301. (16 citations)
    Responsible Robotics is about developing robots in ways that take their social implications into account, which includes conceptually framing robots and their role in the world accurately. We are now in the process of incorporating robots into our world and we are trying to figure out what to make of them and where to put them in our conceptual, physical, economic, legal, emotional and moral world. How humans think about robots, especially humanoid social robots, which elicit complex and sometimes disconcerting (...)
  17. Models of group selection. Deborah G. Mayo & Norman L. Gilinsky - 1987 - Philosophy of Science 54 (4):515-538. (44 citations)
    The key problem in the controversy over group selection is that of defining a criterion of group selection that identifies a distinct causal process that is irreducible to the causal process of individual selection. We aim to clarify this problem and to formulate an adequate model of irreducible group selection. We distinguish two types of group selection models, labeling them type I and type II models. Type I models are invoked to explain differences among groups in their respective rates of (...)
  18. Reframing AI Discourse. Deborah G. Johnson & Mario Verdicchio - 2017 - Minds and Machines 27 (4):575-590. (16 citations)
    A critically important ethical issue facing the AI research community is how AI research and AI products can be responsibly conceptualised and presented to the public. A good deal of fear and concern about uncontrollable AI is now being displayed in public discourse. Public understanding of AI is being shaped in a way that may ultimately impede AI research. The public discourse as well as discourse among AI researchers leads to at least two problems: a confusion about the notion of (...)
  19. Un-making artificial moral agents. Deborah G. Johnson & Keith W. Miller - 2008 - Ethics and Information Technology 10 (2-3):123-133. (36 citations)
    Floridi and Sanders' seminal work, "On the morality of artificial agents," has catalyzed attention around the moral status of computer systems that perform tasks for humans, effectively acting as "artificial agents." Floridi and Sanders argue that the class of entities considered moral agents can be expanded to include computers if we adopt the appropriate level of abstraction. In this paper we argue that the move to distinguish levels of abstraction is far from decisive on this issue. We also argue that (...)
  20. AI, agency and responsibility: the VW fraud case and beyond. Deborah G. Johnson & Mario Verdicchio - 2019 - AI and Society 34 (3):639-647. (8 citations)
    The concept of agency as applied to technological artifacts has become an object of heated debate in the context of AI research because some AI researchers ascribe to programs the type of agency traditionally associated with humans. Confusion about agency is at the root of misconceptions about the possibilities for future AI. We introduce the concept of a triadic agency that includes the causal agency of artifacts and the intentional agency of humans to better describe what happens in AI as (...)
  21. Experimental practice and an error statistical account of evidence. Deborah G. Mayo - 2000 - Philosophy of Science 67 (3):207. (20 citations)
    In seeking general accounts of evidence, confirmation, or inference, philosophers have looked to logical relationships between evidence and hypotheses. Such logics of evidential relationship, whether hypothetico-deductive, Bayesian, or instantiationist, fail to capture or be relevant to scientific practice. They require information that scientists do not generally have (e.g., an exhaustive set of hypotheses), while lacking slots within which to include considerations to which scientists regularly appeal (e.g., error probabilities). Building on my co-symposiasts' contributions, I suggest some directions in which a (...)
  22. Evidence as Passing Severe Tests: Highly Probable versus Highly Probed Hypotheses. Deborah G. Mayo - 2005 - In P. Achinstein (ed.), Scientific Evidence: Philosophical Theories & Applications. The Johns Hopkins University Press. pp. 95-128.
  23. Duhem's problem, the Bayesian way, and error statistics, or "what's belief got to do with it?". Deborah G. Mayo - 1997 - Philosophy of Science 64 (2):222-244. (19 citations)
    I argue that the Bayesian Way of reconstructing Duhem's problem fails to advance a solution to the problem of which of a group of hypotheses ought to be rejected or "blamed" when experiment disagrees with prediction. But scientists do regularly tackle and often enough solve Duhemian problems. When they do, they employ a logic and methodology which may be called error statistics. I discuss the key properties of this approach which enable it to split off the task of testing auxiliary (...)
  24. Principles of inference and their consequences. Deborah G. Mayo & Michael Kruse - 2001 - In David Corfield & Jon Williamson (eds.), Foundations of Bayesianism. Kluwer Academic Publishers. pp. 381-403. (18 citations)
  25. Computers as surrogate agents. Deborah G. Johnson & Thomas M. Powers - 2008 - In M. J. van den Hoven & J. Weckert (eds.), Information Technology and Moral Philosophy. Cambridge University Press. pp. 251.
  26. Frequentist statistics as a theory of inductive inference. Deborah G. Mayo & David Cox - 2006 - In Deborah G. Mayo & Aris Spanos (eds.), Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science. Cambridge University Press. (12 citations)
    After some general remarks about the interrelation between philosophical and statistical thinking, the discussion centres largely on significance tests. These are defined as the calculation of p-values rather than as formal procedures for ‘acceptance’ and ‘rejection’. A number of types of null hypothesis are described and a principle for evidential interpretation set out governing the implications of p-values in the specific circumstances of each application, as contrasted with a long-run interpretation. A number of more complicated situations are discussed (...)
  27. Computer systems and responsibility: A normative look at technological complexity. Deborah G. Johnson & Thomas M. Powers - 2005 - Ethics and Information Technology 7 (2):99-107. (17 citations)
    In this paper, we focus attention on the role of computer system complexity in ascribing responsibility. We begin by introducing the notion of technological moral action (TMA). TMA is carried out by the combination of a computer system user, a system designer (developers, programmers, and testers), and a computer system (hardware and software). We discuss three sometimes overlapping types of responsibility: causal responsibility, moral responsibility, and role responsibility. Our analysis is informed by the well-known accounts provided by Hart and Hart (...)
  28. The New Experimentalism, Topical Hypotheses, and Learning from Error. Deborah G. Mayo - 1994 - PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1994:270-279. (12 citations)
    An important theme to have emerged from the new experimentalist movement is that much of actual scientific practice deals not with appraising full-blown theories but with the manifold local tasks required to arrive at data, distinguish fact from artifact, and estimate backgrounds. Still, no program for working out a philosophy of experiment based on this recognition has been demarcated. I suggest why the new experimentalism has come up short, and propose a remedy appealing to the practice of standard error statistics. (...)
  29. How to discount double-counting when it counts: Some clarifications. Deborah G. Mayo - 2008 - British Journal for the Philosophy of Science 59 (4):857-879. (9 citations)
    The issues of double-counting, use-constructing, and selection effects have long been the subject of debate in the philosophical as well as statistical literature. I have argued that it is the severity, stringency, or probativeness of the test—or lack of it—that should determine if a double-use of data is admissible. Hitchcock and Sober ([2004]) question whether this ‘severity criterion’ can perform its intended job. I argue that their criticisms stem from a flawed interpretation of the severity criterion. Taking their criticism as (...)
  30. Learning from error, severe testing, and the growth of theoretical knowledge. Deborah G. Mayo - 2009 - In Deborah G. Mayo & Aris Spanos (eds.), Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science. Cambridge University Press. pp. 28.
  31. Peircean Induction and the Error-Correcting Thesis. Deborah G. Mayo - 2005 - Transactions of the Charles S. Peirce Society 41 (2):299-319.
  32. An objective theory of statistical testing. Deborah G. Mayo - 1983 - Synthese 57 (3):297-340. (9 citations)
    Theories of statistical testing may be seen as attempts to provide systematic means for evaluating scientific conjectures on the basis of incomplete or inaccurate observational data. The Neyman-Pearson Theory of Testing (NPT) has purported to provide an objective means for testing statistical hypotheses corresponding to scientific claims. Despite their widespread use in science, methods of NPT have themselves been accused of failing to be objective; and the purported objectivity of scientific claims based upon NPT has been called into question. The (...)
  33. Behavioristic, evidentialist, and learning models of statistical testing. Deborah G. Mayo - 1985 - Philosophy of Science 52 (4):493-516. (8 citations)
    While orthodox (Neyman-Pearson) statistical tests enjoy widespread use in science, the philosophical controversy over their appropriateness for obtaining scientific knowledge remains unresolved. I shall suggest an explanation and a resolution of this controversy. The source of the controversy, I argue, is that orthodox tests are typically interpreted as rules for making optimal decisions as to how to behave--where optimality is measured by the frequency of errors the test would commit in a long series of trials. Most philosophers of statistics, however, (...)
  34. Ducks, Rabbits, and Normal Science: Recasting the Kuhn’s-Eye View of Popper’s Demarcation of Science. Deborah G. Mayo - 1996 - British Journal for the Philosophy of Science 47 (2):271-290. (8 citations)
    Kuhn maintains that what marks the transition to a science is the ability to carry out ‘normal’ science—a practice he characterizes as abandoning the kind of testing that Popper lauds as the hallmark of science. Examining Kuhn's own contrast with Popper, I propose to recast Kuhnian normal science. Thus recast, it is seen to consist of severe and reliable tests of low-level experimental hypotheses (normal tests) and is, indeed, the place to look to demarcate science. While thereby vindicating Kuhn on (...)
  35. Computer Ethics. Deborah G. Johnson - 2004 - In Luciano Floridi (ed.), The Blackwell Guide to the Philosophy of Computing and Information. Oxford, UK: Blackwell. pp. 63–75. (8 citations)
    The prelims comprise: Introduction; Metatheoretical and Methodological Issues; Applied and Synthetic Ethics; Traditional and Emerging Issues; Conclusion; Websites and Other Resources.
  36. Cartwright, Causality, and Coincidence. Deborah G. Mayo - 1986 - PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1986:42-58. (8 citations)
    Cartwright argues for being a realist about theoretical entities but non-realist about theoretical laws. Her reason is that while the former involves causal explanation, the latter involves theoretical explanation; and inferences to causes, unlike inferences to theories, can avoid the redundancy objection--that one cannot rule out alternatives that explain the phenomena equally well. I sketch Cartwright's argument for inferring the most probable cause, focusing on Perrin's inference to molecular collisions as the cause of Brownian motion. I argue that either the (...)
  37. Should computer programs be owned? Deborah G. Johnson - 1985 - Metaphilosophy 16 (4):276-288.
  38. Response to Howson and Laudan. Deborah G. Mayo - 1997 - Philosophy of Science 64 (2):323-333. (7 citations)
    A toast is due to one who slays / Misguided followers of Bayes, / And in their heart strikes fear and terror / With probabilities of error! (E. L. Lehmann)
  39. An ad hoc save of a theory of adhocness? Exchanges with John Worrall. Deborah G. Mayo - 2009 - In Deborah G. Mayo & Aris Spanos (eds.), Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science. Cambridge University Press.
  40. Can scientific theories be warranted with severity? Exchanges with Alan Chalmers. Deborah G. Mayo - 2009 - In Deborah G. Mayo & Aris Spanos (eds.), Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science. Cambridge University Press.
  41. Computer ethics: philosophical enquiry. Deborah G. Johnson, James H. Moor & Herman T. Tavani - 2000 - ACM SIGCAS Computers and Society 30 (4):6-9.
  42. Is the global information infrastructure a democratic technology? Deborah G. Johnson - 1997 - ACM SIGCAS Computers and Society 27 (3):20-26.
  43. Did Pearson reject the Neyman-Pearson philosophy of statistics? Deborah G. Mayo - 1992 - Synthese 90 (2):233-262. (6 citations)
    I document some of the main evidence showing that E. S. Pearson rejected the key features of the behavioral-decision philosophy that became associated with the Neyman-Pearson Theory of statistics (NPT). I argue that NPT principles arose not out of behavioral aims, where the concern is solely with behaving correctly sufficiently often in some long run, but out of the epistemological aim of learning about causes of experimental results (e.g., distinguishing genuine from spurious effects). The view Pearson did hold gives a (...)
  44. Everyday Utopias: The Conceptual Life of Promising Spaces by Davina Cooper. Deborah G. Martin - 2016 - philoSOPHIA: A Journal of Continental Feminism 6 (1):146-150.
  45. Error statistics and learning from error: Making a virtue of necessity. Deborah G. Mayo - 1997 - Philosophy of Science 64 (4):212. (6 citations)
    The error statistical account of testing uses statistical considerations, not to provide a measure of probability of hypotheses, but to model patterns of irregularity that are useful for controlling, distinguishing, and learning from errors. The aim of this paper is (1) to explain the main points of contrast between the error statistical and the subjective Bayesian approach and (2) to elucidate the key errors that underlie the central objection raised by Colin Howson at our PSA 96 Symposium.
  46. In defense of the Neyman-Pearson theory of confidence intervals. Deborah G. Mayo - 1981 - Philosophy of Science 48 (2):269-280. (6 citations)
    In Philosophical Problems of Statistical Inference, Seidenfeld argues that the Neyman-Pearson (NP) theory of confidence intervals is inadequate for a theory of inductive inference because, for a given situation, the 'best' NP confidence interval, [CIλ], sometimes yields intervals which are trivial (i.e., tautologous). I argue that (1) Seidenfeld's criticism of trivial intervals is based upon illegitimately interpreting confidence levels as measures of final precision; (2) for the situation which Seidenfeld considers, the 'best' NP confidence interval is not [CIλ] as Seidenfeld (...)
  47. On After-Trial Criticisms of Neyman-Pearson Theory of Statistics. Deborah G. Mayo - 1982 - PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1982:145-158. (6 citations)
    Despite its widespread use in science, the Neyman-Pearson Theory of Statistics (NPT) has been rejected as inadequate by most philosophers of induction and statistics. They base their rejection largely upon what the author refers to as after-trial criticisms of NPT. Such criticisms attempt to show that NPT fails to provide an adequate analysis of specific inferences after the trial is made, and the data is known. In this paper, the key types of after-trial criticisms are considered and it is argued (...)
1 — 47 / 1000